Origin-Destination (OD) matrices record directional flow data between pairs of OD regions. The complex spatiotemporal dependencies in these matrices make the OD matrix forecasting (ODMF) problem not only intractable but also non-trivial. However, most related methods are designed to forecast very short time series in a specific application scenario, and cannot meet the varying requirements of practical applications in terms of both scenario and forecasting length. To address these issues, we propose a Transformer-like model named ODformer with two salient features: (i) a novel OD attention mechanism that captures the special spatial dependencies between OD pairs sharing the same origin (destination), which, when combined with a 2D-GCN that captures spatial dependencies between OD pairs, greatly improves the model's ability to predict across application scenarios; and (ii) a period-based self-attention that effectively forecasts long OD matrix sequences while adapting to the periodicity differences across scenarios. Extensive experiments in three application settings (transportation traffic, IP backbone network traffic, and crowd flow) show that our method outperforms state-of-the-art methods.
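A minimal sketch of the masking idea behind such an OD attention: restrict each flattened OD pair so it only attends to pairs sharing its origin or its destination. This is an illustration only, not ODformer's implementation (which further combines the attention with a 2D-GCN); all names here are ours.

```python
import torch
import torch.nn.functional as F

def od_attention_mask(num_regions: int) -> torch.Tensor:
    """Boolean mask over flattened OD pairs: pair (i, j) may attend to pair (k, l)
    only if they share the origin (i == k) or the destination (j == l)."""
    idx = torch.arange(num_regions)
    origins = idx.repeat_interleave(num_regions)   # origin index of each flattened pair
    dests = idx.repeat(num_regions)                # destination index of each flattened pair
    same_origin = origins[:, None] == origins[None, :]
    same_dest = dests[:, None] == dests[None, :]
    return same_origin | same_dest                 # shape (N*N, N*N)

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention restricted by the OD mask; q, k, v: (num_pairs, d)."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# toy usage: 4 regions -> 16 OD pairs, 8-dim feature per pair
N, d = 4, 8
x = torch.randn(N * N, d)
out = masked_attention(x, x, x, od_attention_mask(N))
```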
In digital histopathology analysis, stain variation usually degrades the generalization ability of deep-learning-based approaches. Two separate lines of work, stain normalization (SN) and stain augmentation (SA), have been proposed to reduce the generalization error: the former alleviates the stain shift across different medical centers using a template image, while the latter enriches the accessible stain styles by simulating more stain variations. However, their applicability is bounded by the selection of template images and the construction of unrealistic styles. To address these problems, we unify SN and SA with a novel RandStainNA scheme, which constrains variable stain styles within a practicable range to train a stain-agnostic deep learning model. RandStainNA performs stain normalization in a collection of color spaces, i.e., HED, HSV, and LAB. In addition, we propose a random color-space selection scheme to gain further performance improvements. We evaluate the method on two diagnostic tasks, tissue subtype classification and nuclei segmentation, with various network backbones. The proposed RandStainNA consistently improves generalization ability, so that our models can cope with more incoming clinical datasets whose stain styles are unpredictable. The code is available at https://github.com/yiqings/randstainna.
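A minimal sketch of the core idea, assuming RGB tiles normalized to [0, 1]: pick a color space at random, then shift each channel toward a randomly sampled "virtual template" whose mean/std are drawn from constrained ranges. The ranges below are placeholders; RandStainNA estimates them from the training cohort.

```python
import numpy as np
from skimage import color

# Placeholder per-color-space ranges for the sampled template statistics (assumption).
STYLE_RANGES = {
    "HED": {"mean": (-0.1, 0.1),  "std": (0.8, 1.2)},
    "HSV": {"mean": (-0.05, 0.05), "std": (0.9, 1.1)},
    "LAB": {"mean": (-5.0, 5.0),  "std": (0.9, 1.1)},
}
TO   = {"HED": color.rgb2hed, "HSV": color.rgb2hsv, "LAB": color.rgb2lab}
BACK = {"HED": color.hed2rgb, "HSV": color.hsv2rgb, "LAB": color.lab2rgb}

def rand_stain_na(rgb: np.ndarray, rng=np.random.default_rng()) -> np.ndarray:
    """Normalize a tile toward a randomly sampled virtual template in a random color space."""
    space = rng.choice(list(STYLE_RANGES))         # random color-space selection
    img = TO[space](rgb)
    r = STYLE_RANGES[space]
    for c in range(img.shape[-1]):
        ch = img[..., c]
        tgt_mean = ch.mean() + rng.uniform(*r["mean"])   # sampled template mean
        tgt_std = ch.std() * rng.uniform(*r["std"])      # sampled template std
        img[..., c] = (ch - ch.mean()) / (ch.std() + 1e-8) * tgt_std + tgt_mean
    return np.clip(BACK[space](img), 0.0, 1.0)
```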
Recently, model-driven deep learning has unrolled certain iterative algorithms of regularization models into cascaded networks by replacing the first-order information of the regularizer (i.e., its (sub)gradient or proximal operator) with a network module, which appears more explainable and predictable than common data-driven networks. Conversely, in theory there does not necessarily exist a functional regularizer whose first-order information matches the substituted network module, which means the network output may not be covered by the original regularization model. Moreover, to date there is no guarantee of global convergence or robustness (regularity) of unrolled networks under realistic assumptions. To bridge this gap, this paper proposes a safeguarded methodology for network unrolling. Specifically, focusing on accelerated MRI, we unroll a zeroth-order algorithm in which the network module represents the regularizer itself, so that the network output can still be covered by the regularization model. Furthermore, inspired by the idea of deep equilibrium models, before backpropagation we run the unrolled iterative network to convergence at a fixed point so as to ensure convergence. We also prove that the proposed network is robust against noisy interference when the measurement data contain noise. Finally, numerical experiments show that the proposed network consistently outperforms state-of-the-art MRI reconstruction methods, including traditional regularization methods and other deep learning methods.
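A minimal sketch of the "iterate to a fixed point before backpropagating" step, in the spirit of deep equilibrium models: run the iteration map without tracking gradients until it converges, then take one differentiable step at the fixed point (a one-step, Jacobian-free approximation rather than exact implicit differentiation). Here `f` stands for one iteration of the unrolled algorithm with the learned regularizer module; the MRI-specific measurement operator is omitted.

```python
import torch

def fixed_point_forward(f, x0, max_iter=50, tol=1e-4):
    """Iterate x <- f(x) without gradient tracking until (approximate) convergence,
    then re-attach to the autograd graph with one differentiable application of f."""
    x = x0
    with torch.no_grad():
        for _ in range(max_iter):
            x_next = f(x)
            if torch.norm(x_next - x) / (torch.norm(x) + 1e-8) < tol:
                x = x_next
                break
            x = x_next
    return f(x)   # gradients flow only through this final step at the fixed point
```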
Strategic behaviors such as "click framing" are widely observed in real recommendation systems. Motivated by such behaviors, we study the problem of combinatorial multi-armed bandits (CMAB) under strategic manipulations of rewards, where each arm can modify the emitted reward signal for its own interest. This characterization of adversarial behavior is a relaxation of previously studied settings such as adversarial attacks and adversarial corruption. We propose a strategic variant of the combinatorial UCB algorithm whose regret under strategic manipulations is at most $O(m \log T + m B_{\max})$, where $T$ is the time horizon, $m$ is the number of arms, and $B_{\max}$ is the maximum budget of an arm. We also provide lower bounds on the budget the arms must spend to incur a certain regret of the bandit algorithm. Extensive experiments on online worker selection for crowdsourcing systems, online influence maximization, and online recommendation, with both synthetic and real datasets, corroborate our theoretical findings on robustness and the regret bounds under various manipulation budget regimes.
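For reference, a minimal sketch of the vanilla combinatorial UCB selection step that such a strategic variant builds on. The manipulation-robust confidence radius from the paper is not reproduced here; the exploration constant and top-$k$ action structure are illustrative assumptions.

```python
import numpy as np

def ucb_indices(emp_mean, counts, t, alpha=1.5):
    """Standard UCB index per base arm: empirical mean plus a confidence radius.
    The strategic variant would enlarge this radius to absorb manipulation budgets."""
    return emp_mean + np.sqrt(alpha * np.log(t) / np.maximum(counts, 1))

def select_super_arm(emp_mean, counts, t, k):
    """Pick the k base arms with the largest UCB indices (a simple top-k super arm)."""
    return np.argsort(-ucb_indices(emp_mean, counts, t))[:k]
```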
Theoretical properties of bilevel problems are well studied when the lower-level problem is strongly convex. In this work, we focus on bilevel optimization problems without the strong-convexity assumption. In these cases, we first show that common local optimality measures such as the KKT condition or regularization can lead to undesired consequences. Then, we aim to identify the mildest conditions that make bilevel problems tractable. We identify two classes of growth conditions on the lower-level objective that lead to continuity of the hyper-objective. Under these assumptions, we show that the local optimality of the bilevel problem can be defined via the Goldstein stationarity condition of the hyper-objective. We then propose the Inexact Gradient-Free Method (IGFM) to solve the bilevel problem, using an approximate zeroth order oracle that is of independent interest. Our non-asymptotic analysis demonstrates that the proposed method can find a $(\delta, \varepsilon)$ Goldstein stationary point for bilevel problems with a zeroth order oracle complexity that is polynomial in $d$, $1/\delta$ and $1/\varepsilon$.
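A minimal sketch of the kind of zeroth-order oracle involved: a standard randomized two-point (sphere-smoothing) gradient estimator built purely from function evaluations. This follows the textbook estimator, not necessarily IGFM's exact oracle; in the bilevel setting `F` would itself be evaluated inexactly by approximately solving the lower-level problem.

```python
import numpy as np

def zeroth_order_grad(F, x, delta=1e-2, num_samples=20, rng=np.random.default_rng(0)):
    """Two-point estimator of the gradient of the delta-smoothed version of F at x,
    using only (possibly inexact) evaluations of F."""
    d = x.size
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                              # uniform direction on the sphere
        g += (F(x + delta * u) - F(x - delta * u)) / (2 * delta) * u
    return d * g / num_samples                              # dimension factor from sphere smoothing
```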
Medical image segmentation (MIS) is essential for supporting disease diagnosis and treatment effect assessment. Despite considerable advances in artificial intelligence (AI) for MIS, clinicians remain skeptical of its utility and maintain low confidence in such black-box systems, a problem exacerbated by poor generalization on out-of-distribution (OOD) data. To move towards effective clinical utilization, we propose a foundation model named EvidenceCap, which makes the black box transparent in a quantifiable way through uncertainty estimation. EvidenceCap not only makes AI visible in regions of uncertainty and on OOD data, but also enhances the reliability, robustness, and computational efficiency of MIS. Uncertainty is modeled explicitly through subjective logic theory to gather strong evidence from features. We show the effectiveness of EvidenceCap on three segmentation datasets and apply it in the clinic. Our work sheds light on safe clinical applications and explainable AI, and can contribute towards trustworthiness in the medical domain.
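A minimal sketch of how subjective logic typically turns per-class evidence into belief masses and an uncertainty (vacuity) score, as in generic evidential deep learning heads: with evidence $e_k \ge 0$, Dirichlet parameters $\alpha_k = e_k + 1$, $S = \sum_k \alpha_k$, belief $b_k = e_k / S$ and uncertainty $u = K / S$. The activation and head below are generic assumptions, not EvidenceCap's exact architecture or loss.

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits: torch.Tensor):
    """Subjective-logic style belief and uncertainty from per-class evidence; logits: (..., K)."""
    evidence = F.softplus(logits)                    # any non-negative activation yields evidence
    alpha = evidence + 1.0                           # Dirichlet concentration parameters
    S = alpha.sum(dim=-1, keepdim=True)              # Dirichlet strength
    belief = evidence / S                            # per-class belief mass
    uncertainty = alpha.shape[-1] / S                # vacuity: K / S
    prob = alpha / S                                 # expected class probabilities
    return belief, uncertainty.squeeze(-1), prob
```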
Deep neural networks are vulnerable to adversarial attacks. In this paper, we take the role of investigators who want to trace the attack and identify the source, that is, the particular model from which the adversarial examples were generated. The techniques derived would aid forensic investigation of attack incidents and serve as deterrence to potential attacks. We consider the buyers-seller setting where a machine learning model is distributed to various buyers and each buyer receives a slightly different copy with the same functionality. A malicious buyer generates adversarial examples from a particular copy $\mathcal{M}_i$ and uses them to attack other copies. From these adversarial examples, the investigator wants to identify the source $\mathcal{M}_i$. To address this problem, we propose a two-stage separate-and-trace framework. The model separation stage generates multiple copies of a model for the same classification task. This process injects unique characteristics into each copy so that the adversarial examples generated from it have distinct and traceable features. We give a parallel structure which embeds a ``tracer'' in each copy, and a noise-sensitive training loss to achieve this goal. The tracing stage takes in adversarial examples and a few candidate models, and identifies the likely source. Based on the unique features induced by the noise-sensitive loss function, we can effectively trace the potential adversarial copy by considering the output logits from each tracer. Empirical results show that it is possible to trace the origin of the adversarial example and the mechanism can be applied to a wide range of architectures and datasets.
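A toy sketch of a tracing stage, under the simplifying assumption that the source copy is the one most confidently fooled by the adversarial example; the paper instead scores candidates using the logits of the embedded tracers, which we do not reproduce here.

```python
import torch

def trace_source(adv_x, true_label, candidate_models):
    """Toy heuristic: score each candidate copy by its peak confidence on a wrong class
    for the adversarial input, and flag the highest-scoring copy as the likely source."""
    scores = []
    with torch.no_grad():
        for model in candidate_models:
            probs = torch.softmax(model(adv_x), dim=-1)
            wrong = probs.clone()
            wrong[..., true_label] = 0.0          # ignore probability mass on the true class
            scores.append(wrong.max().item())
    return int(torch.tensor(scores).argmax())
```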
Stance detection refers to the task of extracting the standpoint (Favor, Against, or Neither) towards a target from given texts. Such research has gained increasing attention with the proliferation of social media content. The conventional framework for stance detection converts it into a text classification task. Deep learning models have already replaced rule-based models and traditional machine learning models in solving such problems. Current deep neural networks face two main challenges: insufficient labeled data and information in social media posts, and the unexplainable nature of deep learning models. A new pre-trained language model, ChatGPT, was launched on Nov 30, 2022. For stance detection tasks, our experiments show that ChatGPT can achieve SOTA or similar performance on commonly used datasets including SemEval-2016 and P-Stance. At the same time, ChatGPT can provide explanations for its own predictions, which is beyond the capability of any existing model. The explanations it gives for cases where it cannot provide a classification result are especially useful. ChatGPT has the potential to be the best AI model for stance detection tasks in NLP, or at least to change the research paradigm of this field. ChatGPT also opens up the possibility of building explanatory AI for stance detection.
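A minimal sketch of prompting a chat model for stance plus explanation. The prompt wording, model name, and example target are illustrative assumptions (not the paper's exact prompt), and the snippet assumes the `openai>=1.0` Python client with an API key in the environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def detect_stance(text: str, target: str) -> str:
    """Ask the model for a one-word stance label followed by a short explanation."""
    prompt = (
        f'What is the stance of the following text towards "{target}"? '
        f'Answer with one word (Favor, Against, or Neither), then briefly explain why.\n\n'
        f'Text: {text}'
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",                     # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

print(detect_stance("Legalization would reduce crime.", "legalization of abortion"))
```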
Denoising Diffusion Probabilistic Models (DDPMs) are emerging in text-to-speech (TTS) synthesis because of their strong capability of generating high-fidelity samples. However, their iterative refinement process in high-dimensional data space results in slow inference speed, which restricts their application in real-time systems. Previous works have explored speeding up inference by minimizing the number of inference steps, but at the cost of sample quality. In this work, to improve the inference speed of DDPM-based TTS models while achieving high sample quality, we propose ResGrad, a lightweight diffusion model which learns to refine the output spectrogram of an existing TTS model (e.g., FastSpeech 2) by predicting the residual between the model output and the corresponding ground-truth speech. ResGrad has several advantages: 1) Compared with other acceleration methods for DDPMs which need to synthesize speech from scratch, ResGrad reduces the complexity of the task by changing the generation target from the ground-truth mel-spectrogram to the residual, resulting in a more lightweight model and thus a smaller real-time factor. 2) ResGrad is employed in the inference process of the existing TTS model in a plug-and-play way, without re-training this model. We verify ResGrad on the single-speaker dataset LJSpeech and two more challenging datasets with multiple speakers (LibriTTS) and high sampling rate (VCTK). Experimental results show that, in comparison with other speed-up methods for DDPMs: 1) ResGrad achieves better sample quality at the same inference speed measured by real-time factor; 2) with similar speech quality, ResGrad synthesizes speech more than 10 times faster than baseline methods. Audio samples are available at https://resgrad1.github.io/.
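A minimal sketch of the plug-and-play inference flow described above: the frozen TTS model produces a coarse mel-spectrogram, a lightweight diffusion model samples only the residual conditioned on it, and the two are summed before vocoding. All object names and signatures here (`tts_model`, `residual_diffusion.sample`) are hypothetical placeholders, not ResGrad's actual interfaces.

```python
import torch

def resgrad_infer(tts_model, residual_diffusion, text, num_steps=4):
    """Refine a frozen TTS model's mel output by adding a diffusion-sampled residual."""
    with torch.no_grad():
        coarse_mel = tts_model(text)                         # e.g. a FastSpeech 2-style output
        residual = residual_diffusion.sample(cond=coarse_mel,
                                             num_steps=num_steps)  # few steps: target is small
    return coarse_mel + residual   # refined mel; a vocoder then produces the waveform
```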
The past two decades have seen increasingly rapid advances in the field of multi-view representation learning, owing to its ability to extract useful information from diverse domains and thereby facilitate the development of multi-view applications. However, the community faces two challenges: i) how to learn robust representations from a large amount of unlabeled data that withstand noise or incomplete-view settings, and ii) how to balance view consistency and complementarity for various downstream tasks. To this end, we utilize a deep fusion network to fuse view-specific representations into a view-common representation, extracting high-level semantics to obtain a robust representation. In addition, we employ a clustering task to guide the fusion network and prevent it from collapsing to trivial solutions. To balance consistency and complementarity, we then design an asymmetrical contrastive strategy that aligns the view-common representation with each view-specific representation. These modules are incorporated into a unified method known as CLustering-guided cOntrastiVE fusioN (CLOVEN). We quantitatively and qualitatively evaluate the proposed method on five datasets, demonstrating that CLOVEN outperforms 11 competitive multi-view learning methods in clustering and classification. In the incomplete-view scenario, our proposed method resists noise interference better than its competitors. Furthermore, visualization analysis shows that CLOVEN can preserve the intrinsic structure of the view-specific representations while also improving the compactness of the view-common representation. Our source code will be available soon at https://github.com/guanzhou-ke/cloven.
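A minimal sketch of an InfoNCE-style alignment between the fused view-common representation and one view-specific representation, in the spirit of the asymmetrical contrastive strategy above: each sample's common code is pulled toward its own view-specific code and pushed away from other samples'. The stop-gradient used to approximate the asymmetry, and the temperature value, are our assumptions rather than CLOVEN's exact formulation.

```python
import torch
import torch.nn.functional as F

def asymmetric_contrastive_loss(z_common, z_view, temperature=0.5):
    """Contrastive alignment between view-common codes (B, d) and one view's codes (B, d)."""
    z_c = F.normalize(z_common, dim=-1)
    z_v = F.normalize(z_view.detach(), dim=-1)      # assumption: detach models the asymmetry
    logits = z_c @ z_v.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(z_c.shape[0], device=z_c.device)   # positives on the diagonal
    return F.cross_entropy(logits, targets)
```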